
    Higher-Order Defeat Without Epistemic Dilemmas

    Many epistemologists have endorsed a version of the view that rational belief is sensitive to higher-order defeat. That is to say, even a fully rational belief state can be defeated by misleading higher-order evidence, which indicates that the belief state is irrational. In a recent paper, however, Maria Lasonen-Aarnio calls this view into doubt. Her argument proceeds in two stages. First, she argues that higher-order defeat calls for a two-tiered theory of epistemic rationality. Secondly, she argues that there seems to be no satisfactory way of avoiding epistemic dilemmas within a two-tiered framework. Hence, she concludes that the prospects look dim for making sense of higher-order defeat within a broader theoretical picture of epistemic rationality. Here I aim to resist both parts of Lasonen-Aarnio’s challenge. First, I outline a way of accommodating higher-order defeat within a single-tiered framework, by amending epistemic rules with appropriate provisos for different kinds of higher-order defeat. Secondly, I argue that those who nevertheless prefer to accommodate higher-order defeat within a two-tiered framework can do so without admitting the possibility of epistemic dilemmas, since epistemic rules are not always accompanied by ‘oughts’ in a two-tiered framework. The considerations put forth thus indirectly vindicate the view that rational belief is sensitive to higher-order defeat.

    Belief gambles in epistemic decision theory

    Don’t form beliefs on the basis of coin flips or random guesses. More generally, don’t take belief gambles: if a proposition is no more likely to be true than false given your total body of evidence, don’t go ahead and believe that proposition. Few would deny this seemingly innocuous piece of epistemic advice. But what, exactly, is wrong with taking belief gambles? Philosophers have debated versions of this question at least since the classic dispute between William Clifford and William James near the end of the nineteenth century. Here I reassess the normative standing of belief gambles from the perspective of epistemic decision theory. The main lesson of the paper is a negative one: it turns out that we need to make some surprisingly strong and hard-to-motivate assumptions to establish a general norm against belief gambles within a decision-theoretic framework. I take this to pose a dilemma for epistemic decision theory: it forces us to either make seemingly unmotivated assumptions to secure a norm against belief gambles, or concede that belief gambles can be rational after all.
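
The tension the abstract gestures at can be made vivid with a standard James-style epistemic utility model. This is a minimal sketch under assumptions of my own (the payoff values R and W are not taken from the paper): a true belief earns R, a false belief costs W, and suspending judgement scores 0. A general norm against belief gambles then requires weighting error at least as heavily as truth (W ≥ R), which is exactly the kind of substantive assumption at issue.

```python
def eu_believe(x, R=1.0, W=1.0):
    """Expected epistemic utility of believing p at credence x:
    gain R if p is true, lose W if p is false; suspending scores 0."""
    return x * R - (1 - x) * W

# With R = W, a coin-flip belief (x = 0.5) is exactly break-even
# with suspending judgement:
print(eu_believe(0.5))                 # 0.0

# But if true belief is rewarded more than error is penalised,
# the 50/50 gamble strictly beats suspending:
print(eu_believe(0.5, R=2.0, W=1.0))   # 0.5
```

The point of the toy model: nothing in the bare decision-theoretic machinery rules out belief gambles; one has to stipulate W ≥ R, and that stipulation needs independent motivation.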

    Reconciling Enkrasia and Higher-Order Defeat

    Titelbaum (Oxford Studies in Epistemology, 2015) has recently argued that the Enkratic Principle is incompatible with the view that rational belief is sensitive to higher-order defeat. That is to say, if it cannot be rational to have akratic beliefs of the form “p, but I shouldn’t believe that p,” then rational beliefs cannot be defeated by higher-order evidence, which indicates that they are irrational. In this paper, I distinguish two ways of understanding Titelbaum’s argument, and argue that neither version is sound. The first version can be shown to rest on a subtle, but crucial, misconstrual of the Enkratic Principle. The second version can be resisted through careful consideration of cases of higher-order defeat. The upshot is that proponents of the Enkratic Principle are free to maintain that rational belief is sensitive to higher-order defeat.

    The Humility Heuristic, or: People Worth Trusting Admit to What They Don’t Know

    People don't always speak the truth. When they don't, we do better not to trust them. Unfortunately, that's often easier said than done. People don't usually wear a ‘Not to be trusted!’ badge on their sleeves, which lights up every time they depart from the truth. Given this, what can we do to figure out whom to trust, and whom not? My aim in this paper is to offer a partial answer to this question. I propose a heuristic—the “Humility Heuristic”—which is meant to help guide our search for trustworthy advisors. In slogan form, the heuristic says: people worth trusting admit to what they don't know. I give this heuristic a precise probabilistic interpretation, offer a simple argument for it, defend it against some potential worries, and demonstrate its practical worth by showing how it can help address some difficult challenges in the relationship between experts and laypeople.
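
One natural probabilistic reading of the slogan (a sketch of mine, not necessarily the paper's own formalization) is that observing an advisor admit ignorance should raise your credence that they are trustworthy, provided trustworthy advisors admit ignorance more readily than untrustworthy ones. The numbers below are illustrative assumptions:

```python
def posterior_trustworthy(prior, p_admit_if_trustworthy, p_admit_if_not):
    """Bayes' theorem: credence that an advisor is trustworthy (T),
    updated on observing them admit ignorance (A)."""
    numerator = p_admit_if_trustworthy * prior
    return numerator / (numerator + p_admit_if_not * (1 - prior))

# Assumed likelihoods: trustworthy advisors admit what they don't know
# three times as often as untrustworthy ones.
print(posterior_trustworthy(0.5, 0.6, 0.2))   # ≈ 0.75
```

As long as the admission is more likely from a trustworthy advisor than from an untrustworthy one, the posterior exceeds the prior, which is the heuristic's core claim.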

    Higher-Order Defeat and the Impossibility of Self-Misleading Evidence

    Evidentialism is the thesis, roughly, that one’s beliefs should fit one’s evidence. The enkratic principle is the thesis, roughly, that one’s beliefs should ‘line up’ with one’s beliefs about which beliefs one ought to have. While both theses have seemed attractive to many, they jointly entail the controversial thesis that self-misleading evidence is impossible. That is to say, if evidentialism and the enkratic principle are both true, one’s evidence cannot support certain false beliefs about which beliefs one’s evidence supports. Recently, a number of epistemologists have challenged the thesis that self-misleading evidence is impossible on the grounds that misleading higher-order evidence does not have the kind of strong and systematic defeating force that would be needed to rule out the possibility of such self-misleading evidence. Here I respond to this challenge by proposing an account of higher-order defeat that does, indeed, render self-misleading evidence impossible. Central to the proposal is the idea that higher-order evidence acquires its normative force by influencing which conditional beliefs it is rational to have. What emerges, I argue, is an independently plausible view of higher-order evidence, which has the additional benefit of allowing us to reconcile evidentialism with the enkratic principle.

    A Dynamic Solution to the Problem of Logical Omniscience

    The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
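
The contrast between what an agent believes and what it can infer can be illustrated with a toy model (my own sketch, far simpler than the paper's formal framework): beliefs are a set of sentences that is not deductively closed, but can be expanded by applying an inference rule one step at a time. An agent that needs n steps to reach a consequence is competent without being omniscient.

```python
def mp_step(beliefs):
    """Apply one round of modus ponens: from 'p' and 'p->q', infer 'q'.
    Beliefs are plain strings; conditionals use the surface form 'p->q'."""
    expanded = set(beliefs)
    for b in beliefs:
        if '->' in b:
            antecedent, consequent = b.split('->', 1)
            if antecedent in beliefs:
                expanded.add(consequent)
    return expanded

beliefs = {'p', 'p->q', 'q->r'}
after_one = mp_step(beliefs)    # 'q' is inferred, but 'r' is not yet believed
after_two = mp_step(after_one)  # a second inferential step reaches 'r'
```

At no point is the belief set closed under consequence, yet every consequence within the agent's inferential reach becomes believed after finitely many steps—the combination the static models reportedly fail to capture.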

    Bayesianism for Non-ideal Agents

    Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.

    Explaining the Illusion of Asymmetric Insight

    People tend to think that they know others better than others know them. This phenomenon is known as the “illusion of asymmetric insight.” While the illusion has been well documented by a series of recent experiments, less has been done to explain it. In this paper, we argue that extant explanations are inadequate because they either get the explanatory direction wrong or fail to accommodate the experimental results in a sufficiently nuanced way. Instead, we propose a new explanation that does not face these problems. The explanation is based on two other well-documented psychological phenomena: the tendency to accommodate ambiguous evidence in a biased way, and the tendency to overestimate how much better we know ourselves than we know others.

    An Instrumentalist Account of How to Weigh Epistemic and Practical Reasons for Belief

    When one has both epistemic and practical reasons for or against some belief, how do these reasons combine into an all-things-considered reason for or against that belief? The question might seem to presuppose the existence of practical reasons for belief. But we can rid the question of this presupposition. Once we do, a highly general ‘Combinatorial Problem’ emerges. The problem has been thought to be intractable due to certain differences in the combinatorial properties of epistemic and practical reasons. Here we bring good news: if we accept an independently motivated version of epistemic instrumentalism—the view that epistemic reasons are a species of instrumental reasons—we can reduce The Combinatorial Problem to the relatively benign problem of how to weigh different instrumental reasons against each other. As an added benefit, the instrumentalist account can explain the apparent intractability of The Combinatorial Problem in terms of a common tendency to think and talk about epistemic reasons in an elliptical manner.

    When Conciliation Frustrates the Epistemic Priorities of Groups

    Our aim in this chapter is to draw attention to what we see as a disturbing feature of conciliationist views of disagreement. Roughly put, the trouble is that conciliatory responses to in-group disagreement can lead to the frustration of a group's epistemic priorities: that is, the group's favoured trade-off between the "Jamesian goals" of truth-seeking and error-avoidance. We show how this problem can arise within a simple belief aggregation framework, and draw some general lessons about when the problem is most pronounced. We close with a tentative proposal for how to solve the problem raised without rejecting conciliationism.
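
How conciliation might override a group's chosen trade-off can be seen in a toy aggregation setup (the quota rule and all numbers below are my own illustrative assumptions, not the chapter's framework): a cautious, error-avoiding group adopts a belief only when nearly all members individually hold it. Splitting the difference before aggregating can manufacture exactly the unanimity the quota was meant to demand.

```python
def group_believes(credences, belief_threshold=0.5, quota=0.9):
    """Quota rule: the group believes p iff at least `quota` of its
    members individually believe p (credence above belief_threshold).
    A high quota encodes an error-avoidance-first Jamesian priority."""
    believers = sum(1 for c in credences if c > belief_threshold)
    return believers / len(credences) >= quota

before = [0.9, 0.4, 0.4]            # only one believer out of three
avg = sum(before) / len(before)     # ≈ 0.57
after = [avg] * len(before)         # full conciliation: everyone splits the difference

print(group_believes(before))       # False: the cautious quota blocks belief
print(group_believes(after))        # True: conciliation yields unanimity
```

Before conciliation, the near-unanimity quota keeps the group suspended; after members conciliate toward the mean, all three credences clear the belief threshold and the group believes p—even though the quota was chosen precisely to make group belief hard to come by.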